In recent years, multi-scale generative adversarial networks (GANs) have been proposed to build generalized image processing models from a single sample. Constrained by the sample size, multi-scale GANs have great difficulty converging to the global optimum, which ultimately limits their capabilities. In this paper, we pioneer the introduction of PAC-Bayes generalization bound theory into the training analysis of specific models under different adversarial training methods, which yields a non-vacuous upper bound on the generalization error for the specified multi-scale GAN structure. Based on the drastic changes we found in the generalization error bound under different adversarial attacks and different training states, we propose an adaptive training method which can greatly improve the image manipulation ability of multi-scale GANs. The final experimental results show that our adaptive training method greatly improves the quality of the images generated by multi-scale GANs on several image manipulation tasks. In particular, for the image super-resolution restoration task, the multi-scale GAN model trained by the proposed method achieves a 100% reduction in natural image quality evaluator (NIQE) and a 60% reduction in root mean squared error (RMSE), which is better than many models trained on large-scale datasets.
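For context, one classical form of the PAC-Bayes generalization bound (the McAllester-style bound; the paper's GAN-specific bound will differ in its details) states that for any prior $\pi$ over hypotheses, with probability at least $1-\delta$ over an i.i.d. sample of size $m$, every posterior $\rho$ satisfies

```latex
\mathbb{E}_{h \sim \rho}\big[L(h)\big] \;\le\;
\mathbb{E}_{h \sim \rho}\big[\hat{L}(h)\big]
\;+\; \sqrt{\frac{\mathrm{KL}(\rho \,\|\, \pi) + \ln\frac{2\sqrt{m}}{\delta}}{2m}}
```

where $L$ and $\hat{L}$ are the true and empirical risks. The bound is non-vacuous only when the KL term stays small relative to $m$, which is why small-sample settings such as single-image GAN training make the analysis delicate.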
Airway segmentation is crucial for the examination, diagnosis, and prognosis of lung diseases, while its manual delineation is unduly burdensome. To alleviate this time-consuming and potentially subjective manual procedure, researchers have proposed methods to automatically segment airways from computed tomography (CT) images. However, some small-sized airway branches (e.g., bronchi and terminal bronchioles) significantly aggravate the difficulty of automatic segmentation by machine learning models. In particular, the variance of voxel values and the severe data imbalance in airway branches make computational modules prone to discontinuous and false-negative predictions. Attention mechanisms have demonstrated the capacity to segment complex structures, while fuzzy logic can reduce the uncertainty in feature representations. Therefore, the integration of deep attention networks and fuzzy theory, given by the fuzzy attention layer, should be an escalated solution. This paper presents an efficient airway segmentation method comprising a novel fuzzy attention neural network and a comprehensive loss function to enhance the spatial continuity of airway segmentation. The deep fuzzy set is formulated by a set of voxels in the feature map and a learnable Gaussian membership function. Different from existing attention mechanisms, the proposed channel-specific fuzzy attention addresses the issue of heterogeneous features across different channels. Furthermore, a novel evaluation metric is proposed to assess both the continuity and completeness of airway structures. The efficiency of the proposed method has been demonstrated by testing on open datasets, including the EXACT'09 and LIDC datasets, as well as our in-house COVID-19 and fibrotic lung disease datasets.
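The fuzzy attention idea can be illustrated with a minimal sketch: each channel of a feature map carries a learnable Gaussian membership function, and the map is re-weighted by its membership degree. This is a simplified NumPy illustration under our own naming and a fixed per-channel parameterization, not the paper's exact layer:

```python
import numpy as np

def gaussian_membership(x, mu, sigma):
    """Fuzzy membership degree of each feature value, in (0, 1]."""
    return np.exp(-((x - mu) ** 2) / (2 * sigma ** 2))

def fuzzy_attention(feat, mu, sigma):
    """Re-weight a feature map by its per-channel fuzzy membership.

    feat: (C, D, H, W) feature map; mu, sigma: (C,) learnable parameters.
    """
    m = gaussian_membership(
        feat, mu[:, None, None, None], sigma[:, None, None, None]
    )
    return feat * m
```

Voxels whose activations sit far from a channel's learned center are softly suppressed, which is one way a fuzzy layer can reduce the uncertainty of heterogeneous channel responses.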
Various structures in human physiology follow specific morphologies, often expressing complexity at very fine scales. Examples of such structures are the intrathoracic airways, retinal blood vessels, and hepatic blood vessels. Large collections of 2D and 3D images have been made available by medical imaging modalities such as magnetic resonance imaging (MRI), computed tomography (CT), and optical coherence tomography (OCT), in which the spatial arrangement of these structures can be observed. Segmentation of these structures in medical imaging is of great importance, since analysis of the structures provides insights into disease diagnosis, treatment planning, and prognosis. Manually labeling extensive data by radiologists is often time-consuming and error-prone. As a result, automated or semi-automated computational models have become a popular research area in medical imaging over the past two decades, and many such models have been developed to date. In this survey, we aim to provide a comprehensive review of currently publicly available datasets, segmentation algorithms, and evaluation metrics. In addition, current challenges and future research directions are discussed.
Algorithmic fairness has attracted increasing attention in the machine learning community. Various definitions have been proposed in the literature, but the differences and connections among them are not clearly addressed. In this paper, we review and reflect on various fairness notions previously proposed in the machine learning literature, and attempt to draw connections to arguments in moral and political philosophy, especially theories of justice. We also consider fairness inquiries from a dynamic perspective, further accounting for the long-term impact induced by current prediction and decision-making. In light of the differences among the characterized fairness notions, we present a flowchart that encompasses the implicit assumptions and expected outcomes of different types of fairness inquiries on the data generating process, on the predicted outcome, and on the induced impact. This paper demonstrates the importance of matching the mission (which kind of fairness one would like to enforce) and the means (what the scope of the fairness analysis is, and what the appropriate analyzing scheme is) to fulfill the intended purpose.
In the past two years, the turbulence caused by the arrival of COVID-19 has continued to bring new challenges. During this COVID-19 pandemic, there has been a need for rapid identification of infected patients and specific delineation of infection areas in computed tomography (CT) images. Although deep supervised learning methods have been established quickly, the scarcity of both image-level and pixel-level labels, as well as the lack of explainable transparency, still hinder the applicability of AI. Can we identify infected patients and delineate infections with extremely limited supervision? Semi-supervised learning has demonstrated promising performance under limited labeled data and sufficient unlabeled data. Inspired by semi-supervised learning, we propose a model-agnostic calibrated pseudo-labeling strategy and apply it under a consistency regularization framework to generate explainable identification and delineation results. We demonstrate the effectiveness of our model with combinations of limited labeled data and sufficient unlabeled or weakly labeled data. Extensive experiments show that our model can efficiently utilize limited labeled data and provide explainable classification and segmentation results for decision-making in clinical routine. The code is available at https://github.com/ayanglab/xai covid-11.
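The core of pseudo-labeling can be sketched in a few lines (the threshold value and function names are illustrative assumptions, not the paper's exact calibrated strategy): model predictions on unlabeled scans become training targets only when the model is sufficiently confident.

```python
import numpy as np

def pseudo_labels(probs, threshold=0.9):
    """Keep only predictions whose max class probability clears a threshold.

    probs: (N, K) predicted class probabilities on unlabeled data.
    Returns (labels, mask): hard labels and a boolean mask of retained samples.
    """
    conf = probs.max(axis=1)
    labels = probs.argmax(axis=1)
    mask = conf >= threshold
    return labels, mask
```

Calibration matters here because an over-confident but miscalibrated model would pass wrong labels through the threshold, which is the failure mode a calibrated strategy is meant to suppress.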
As the COVID-19 pandemic rampages across the world, the demand for video conferencing has surged. To this end, real-time portrait segmentation has become a popular feature for replacing the backgrounds of conference participants. While rich datasets, models, and algorithms have been offered for segmentation that extracts body postures from live scenes, portrait segmentation has not yet been well covered in the video conferencing context. To facilitate progress in this field, we introduce an open-source solution named PP-HumanSeg. This work is the first to construct a large-scale video portrait dataset, containing 291 videos captured in conference scenes with 14K finely annotated frames, and it extends to multi-camera teleconferencing. Furthermore, we propose a novel Semantic Connectivity-aware Learning (SCL) scheme for semantic segmentation, which introduces a semantic connectivity-aware loss to improve segmentation results from the perspective of connectivity. We also propose an ultra-lightweight model with SCL for practical portrait segmentation, achieving the best trade-off between IoU and inference speed. Extensive evaluations on our dataset demonstrate the superiority of SCL and our model. The source code is available at https://github.com/paddlepaddle/paddleseg.
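The connectivity perspective can be illustrated with a toy metric (a simplification of our own for illustration; the actual SCL loss matches connected components between prediction and ground truth and scores their overlap): count what fraction of ground-truth connected components the prediction touches at all.

```python
import numpy as np

def connected_components(mask):
    """Label 4-connected components in a binary 2D mask via flood fill."""
    labels = np.zeros(mask.shape, dtype=int)
    current = 0
    for i in range(mask.shape[0]):
        for j in range(mask.shape[1]):
            if mask[i, j] and labels[i, j] == 0:
                current += 1
                stack = [(i, j)]
                while stack:
                    y, x = stack.pop()
                    if (0 <= y < mask.shape[0] and 0 <= x < mask.shape[1]
                            and mask[y, x] and labels[y, x] == 0):
                        labels[y, x] = current
                        stack += [(y + 1, x), (y - 1, x), (y, x + 1), (y, x - 1)]
    return labels, current

def connectivity_score(pred, gt):
    """Fraction of ground-truth components that the prediction overlaps."""
    labels, n = connected_components(gt)
    if n == 0:
        return 1.0
    hit = sum(1 for c in range(1, n + 1) if (pred & (labels == c)).any())
    return hit / n
```

A pixel-wise loss can score a fragmented prediction highly, whereas a connectivity-aware term penalizes it for missing whole components, which is the intuition behind SCL.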
This paper studies the transferability of a model when human decision subjects respond to the deployed machine learning model. In our setting, an agent or user corresponds to an example $(x, y)$ drawn from a distribution $\mathcal{D}$ and will face a model $h$ and its classification result $h(x)$. Agents can modify $x$ to adapt to $h$, which will incur a distribution shift on $(x, y)$. Therefore, when training $h$, the learner needs to account for the "induced" distribution that arises once the output model is deployed. Our formulation is motivated by applications where deployed machine learning models interact with human agents and ultimately face responsive and interactive data distributions. We formalize the discussion by studying how the transferability of a model trained on the available source distribution (data) translates into performance on the induced domain. We provide upper bounds on the performance gap due to the induced domain shift, as well as lower bounds on the trade-off a classifier must suffer on either the source training distribution or the induced target distribution. We provide further instantiated analyses for two popular domain adaptation settings, with covariate shift and target shift.
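A minimal simulation of the induced shift for a linear classifier (a standard strategic-classification toy under our own naming and a unit-cost rule, not the paper's general formulation): negatively classified agents move the minimum distance to the decision boundary whenever that distance fits their budget.

```python
import numpy as np

def best_response(X, w, b, budget=1.0):
    """Agents facing h(x) = 1[w.x + b >= 0] shift x minimally along w
    to reach the boundary, but only if the required move fits the budget."""
    score = X @ w + b
    dist = np.abs(score) / np.linalg.norm(w)       # L2 distance to boundary
    move = (score < 0) & (dist <= budget)          # only rejected agents move
    shift = np.where(move, -score / (w @ w), 0.0)  # coefficient along w
    return X + shift[:, None] * w                  # induced feature shift
```

Training on the source data while evaluating on `best_response`-shifted data exhibits exactly the source-versus-induced-domain gap that the paper's bounds quantify.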
As one of the most important psychic stress reactions, micro-expressions (MEs) are spontaneous and transient facial expressions that can reveal the genuine emotions of human beings. Thus, recognizing MEs (MER) automatically is becoming increasingly crucial in the field of affective computing, and provides essential technical support for lie detection, psychological analysis and other areas. However, the lack of abundant ME data seriously restricts the development of cutting-edge data-driven MER models. Despite several recent spontaneous ME datasets aimed at alleviating this problem, the available data remain scarce. To solve the problem of ME data hunger, we construct a dynamic spontaneous ME dataset with the largest ME data scale to date, called DFME (Dynamic Facial Micro-expressions), which includes 7,526 well-labeled ME videos induced from 671 participants and annotated by more than 20 annotators over three years. Afterwards, we adopt four classical spatiotemporal feature learning models on DFME to perform MER experiments, objectively verifying the validity of the DFME dataset. In addition, we explore different solutions to the class imbalance and key-frame sequence sampling problems in dynamic MER on DFME, so as to provide a valuable reference for future research. The comprehensive experimental results show that our DFME dataset can facilitate the research of automatic MER, and provides a new benchmark for MER. DFME will be published via https://mea-lab-421.github.io.
Reading comprehension of legal text can be a particularly challenging task due to the length and complexity of legal clauses and a shortage of expert-annotated datasets. To address this challenge, we introduce the Merger Agreement Understanding Dataset (MAUD), an expert-annotated reading comprehension dataset based on the American Bar Association's 2021 Public Target Deal Points Study, with over 39,000 examples and over 47,000 total annotations. Our fine-tuned Transformer baselines show promising results, with models performing well above random on most questions. However, on a large subset of questions, there is still room for significant improvement. As the only expert-annotated merger agreement dataset, MAUD is valuable as a benchmark for both the legal profession and the NLP community.
An increasing number of public datasets have shown a marked clinical impact on assessing anatomical structures. However, each of these datasets is small, partially labeled, and rarely investigates severe tumor subjects. Moreover, current models are limited to segmenting specific organs/tumors, and cannot be extended to novel domains and classes. To tackle these limitations, we introduce embeddings learned from Contrastive Language-Image Pre-training (CLIP) into segmentation models, dubbed the CLIP-Driven Universal Model. The Universal Model can better segment 25 organs and 6 types of tumors by exploiting the semantic relationship between abdominal structures. The model is developed from an assembly of 14 datasets with 3,410 CT scans and evaluated on 6,162 external CT scans from 3 datasets. We rank first on the public leaderboard of the Medical Segmentation Decathlon (MSD) and achieve state-of-the-art results on Beyond The Cranial Vault (BTCV). Compared with dataset-specific models, the Universal Model is computationally more efficient (6x faster), generalizes better to CT scans from varying sites, and shows stronger transfer learning performance on novel tasks. The design of the CLIP embedding enables the Universal Model to be easily extended to new classes without catastrophically forgetting the previously learned classes.
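The CLIP-driven design can be sketched in a few lines (a simplification under our own naming; the actual Universal Model conditions a full segmentation decoder on the text embeddings): each class's text embedding is compared against per-location image features to produce one soft mask per class, so adding a class only requires adding a text embedding.

```python
import numpy as np

def clip_driven_masks(pixel_feat, class_emb):
    """One sigmoid mask per class from text-embedding/feature similarity.

    pixel_feat: (D, H, W) image features over an H x W grid;
    class_emb: (K, D) class text embeddings (e.g., from CLIP).
    Returns (K, H, W) soft masks in (0, 1).
    """
    logits = np.einsum('kd,dhw->khw', class_emb, pixel_feat)
    return 1.0 / (1.0 + np.exp(-logits))
```

Because each class is an independent embedding-conditioned binary mask rather than a fixed output channel, new classes can be appended without re-training the old output heads, which is the mechanism behind the no-catastrophic-forgetting claim.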